Task and Model Agnostic Adversarial Attack on Graph Neural Networks

Authors

Abstract

Adversarial attacks on Graph Neural Networks (GNNs) reveal their security vulnerabilities, limiting their adoption in safety-critical applications. However, existing attack strategies rely on knowledge of either the GNN model being used or the predictive task being attacked. Is this knowledge necessary? For example, a graph may be used for multiple downstream tasks unknown to a practical attacker. It is thus important to test the vulnerability of GNNs to adversarial perturbations in a model- and task-agnostic setting. In this work, we study this problem and show that GNNs remain vulnerable even when the downstream task and model are unknown. The proposed algorithm, TANDIS (Targeted Attack via Neighborhood DIStortion), shows that distortion of node neighborhoods is effective in drastically compromising prediction performance. Although neighborhood distortion is an NP-hard problem, TANDIS designs an effective heuristic through a novel combination of a Graph Isomorphism Network with deep Q-learning. Extensive experiments on real datasets show that, on average, TANDIS is up to 50% more effective than state-of-the-art techniques, while being more than 1000 times faster.
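The abstract names the ingredients of the attack (neighborhood distortion, a Graph Isomorphism Network encoder, deep Q-learning) without giving details. The sketch below is not the TANDIS implementation; it is a minimal illustration of the neighborhood-distortion idea under stated assumptions: an untrained GIN-style encoder measures how much a candidate edge flip shifts the embeddings of a target node's neighborhood, and a simple greedy budgeted search stands in for the paper's learned deep Q-network. All names (gin_embed, neighborhood_distortion, greedy_attack) and the random toy graph are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)


def gin_embed(adj, feats, weights, eps=0.0):
    """Toy GIN-style encoder: h <- ReLU(((1 + eps) * h + sum of neighbor h) @ W)."""
    h = feats
    for W in weights:
        h = np.maximum(0.0, ((1.0 + eps) * h + adj @ h) @ W)
    return h


def neighborhood_distortion(adj, pert_adj, feats, weights, target):
    """Embedding shift of the target node and its 1-hop neighborhood after a perturbation."""
    hood = np.flatnonzero(adj[target]).tolist() + [target]
    before = gin_embed(adj, feats, weights)[hood]
    after = gin_embed(pert_adj, feats, weights)[hood]
    return float(np.linalg.norm(before - after))


def greedy_attack(adj, feats, weights, target, budget):
    """Greedy stand-in for a learned policy: flip the edges that most distort the target's neighborhood."""
    adj = adj.copy()
    n = adj.shape[0]
    flips = []
    for _ in range(budget):
        best, best_gain = None, -1.0
        for u in range(n):
            for v in range(u + 1, n):
                cand = adj.copy()
                cand[u, v] = cand[v, u] = 1.0 - cand[u, v]  # flip edge (u, v)
                gain = neighborhood_distortion(adj, cand, feats, weights, target)
                if gain > best_gain:
                    best, best_gain = (u, v), gain
        u, v = best
        adj[u, v] = adj[v, u] = 1.0 - adj[u, v]
        flips.append(best)
    return flips, adj


# Tiny random graph and random (untrained) GIN weights, purely for illustration.
n, d = 12, 8
upper = np.triu((rng.random((n, n)) < 0.2).astype(float), 1)
adj = upper + upper.T
feats = rng.standard_normal((n, d))
weights = [rng.standard_normal((d, d)) * 0.3 for _ in range(2)]

flips, perturbed = greedy_attack(adj, feats, weights, target=0, budget=2)
print("edge flips chosen:", flips)
```

Exhaustively scoring all O(n^2) candidate flips, as this greedy loop does, is only feasible on toy graphs; the appeal of a learned Q-function, as described in the abstract, is that it avoids such exhaustive enumeration.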


Similar Articles

Ranking Attack Graphs with Graph Neural Networks

Network security analysis based on attack graphs has been applied extensively in recent years. The ranking of nodes in an attack graph is an important step towards analyzing network security. This paper proposes an alternative attack graph ranking scheme based on a recent approach to machine learning in a structured graph domain, namely, Graph Neural Networks (GNNs). Evidence is presented in th...

Learning Social Graph Topologies using Generative Adversarial Neural Networks

Although sources of social media data abound, companies are often reluctant to share data, even anonymized or aggregated, for fear of violating user privacy. This paper introduces an approach for learning the probability of link formation from data using generative adversarial neural networks. In our generative adversarial network (GAN) paradigm, one neural network is trained to generate the gr...

Snowflake: A Model Agnostic Accelerator for Deep Convolutional Neural Networks

Deep convolutional neural networks (CNNs) are the deep learning model of choice for performing object detection, classification, semantic segmentation and natural language processing tasks. CNNs require billions of operations to process a frame. This computational complexity, combined with the inherent parallelism of the convolution operation, makes CNNs an excellent target for custom accelerator...

Domain-Adversarial Neural Networks

We introduce a new representation learning algorithm suited to the context of domain adaptation, in which data at training and test time come from similar but different distributions. Our algorithm is directly inspired by theory on domain adaptation suggesting that, for effective domain transfer to be achieved, predictions must be made based on a data representation that cannot discriminate bet...

Trojaning Attack on Neural Networks

With the fast spread of machine learning techniques, sharing and adopting public machine learning models have become very popular. This gives attackers many new opportunities. In this paper, we propose a trojaning attack on neural networks. As the models are not intuitive for humans to understand, the attack features stealthiness. Deploying trojaned models can cause various severe consequences includ...


Journal

Journal title: Proceedings of the ... AAAI Conference on Artificial Intelligence

Year: 2023

ISSN: 2159-5399, 2374-3468

DOI: https://doi.org/10.1609/aaai.v37i12.26761